Structure Preserving Non-negative Feature Self-Representation for Unsupervised Feature Selection
Authors
Abstract
Similar Resources
Unsupervised Feature Selection by Preserving Stochastic Neighbors
Feature selection is an important technique for alleviating the curse of dimensionality. Unsupervised feature selection is more challenging than its supervised counterpart due to the lack of labels. In this paper, we present an effective method, Stochastic Neighbor-preserving Feature Selection (SNFS), for selecting discriminative features in the unsupervised setting. We employ the concept of stochas...
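The excerpt above only names the idea of preserving stochastic neighbors. As a rough, hedged illustration of that idea (not the SNFS optimization itself), the sketch below builds the row-stochastic neighbor probabilities of the full feature space and scores each feature by how well its own one-dimensional neighborhood distribution matches them via a KL divergence; the function names, the Gaussian bandwidth `sigma`, and the per-feature filter scoring are assumptions for illustration only.

```python
import numpy as np

def neighbor_probabilities(X, sigma=1.0):
    """Row-stochastic neighbor probabilities P[i, j] ~ exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)                       # a point is not its own neighbor
    return P / P.sum(axis=1, keepdims=True)

def snfs_style_scores(X, sigma=1.0, eps=1e-12):
    """Score each feature by how well it alone preserves the full-space
    stochastic neighborhoods (lower KL divergence = better)."""
    P_full = neighbor_probabilities(X, sigma)
    scores = []
    for j in range(X.shape[1]):
        P_j = neighbor_probabilities(X[:, [j]], sigma)
        kl = np.sum(P_full * np.log((P_full + eps) / (P_j + eps)), axis=1).mean()
        scores.append(kl)
    return np.array(scores)                        # select features with the smallest scores

X = np.random.rand(50, 8)
ranking = np.argsort(snfs_style_scores(X))         # best-preserving features first
print(ranking)
```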
Feature Selection for Unsupervised Learning
In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need for finding the number of clusters in conjunction with feature selection, and the need for normalizing the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Se...
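As a rough illustration of the wrapper idea described above (searching over feature subsets while also estimating the number of clusters), the sketch below performs greedy forward selection around a Gaussian-mixture clusterer, re-picking the number of components by BIC at every step. BIC and the greedy search are generic stand-ins, and the sketch deliberately ignores the dimensionality-bias normalization that the FSSEM work addresses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_bic(X, k_range=range(2, 6)):
    """Fit a Gaussian mixture for each candidate k and return the best (lowest) BIC."""
    return min(GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
               for k in k_range)

def forward_wrapper_selection(X, n_select=3):
    """Greedy forward selection: add the feature that most improves the
    clustering criterion, re-estimating the number of clusters each time."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        scores = {j: best_bic(X[:, selected + [j]]) for j in remaining}
        best = min(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.random.rand(100, 6)
print(forward_wrapper_selection(X))
```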
Unsupervised Feature Selection Using Feature Similarity
In this article, we describe an unsupervised feature selection algorithm suitable for data sets, large in both dimension and size. The method is based on measuring similarity between features whereby redundancy therein is removed. This does not need any search and, therefore, is fast. A new feature similarity measure, called maximum information compression index, is introduced. The algorithm i...
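The maximum information compression index for a pair of features is the smaller eigenvalue of their 2x2 covariance matrix; it is zero exactly when the two features are linearly dependent, i.e. fully redundant. The sketch below uses it inside a simplified greedy redundancy-removal loop; the pairwise greedy search and the `n_keep` parameter are assumptions for illustration and are not the k-nearest-feature clustering procedure of the original algorithm.

```python
import numpy as np

def mici(x, y):
    """Maximum information compression index: the smaller eigenvalue of the
    2x2 covariance matrix of the feature pair (0 when the pair is perfectly
    linearly dependent, i.e. fully redundant)."""
    cov = np.cov(np.stack([x, y]))
    return np.linalg.eigvalsh(cov)[0]

def remove_redundant_features(X, n_keep):
    """Greedily drop one feature from the most redundant (lowest-MICI) pair
    until only n_keep features remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        pairs = [(mici(X[:, a], X[:, b]), b)
                 for i, a in enumerate(keep) for b in keep[i + 1:]]
        _, drop = min(pairs)                       # drop one member of the most similar pair
        keep.remove(drop)
    return keep

X = np.random.rand(80, 10)
X[:, 3] = 2 * X[:, 0] + 0.01 * np.random.rand(80)  # an almost-duplicate feature
print(remove_redundant_features(X, n_keep=8))       # one of features 0 and 3 gets dropped
```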
Embedded Unsupervised Feature Selection
Sparse learning has been proven to be a powerful technique in supervised feature selection, as it makes it possible to embed feature selection into the classification (or regression) problem. In recent years, increasing attention has been paid to applying sparse learning in unsupervised feature selection. Due to the lack of label information, the vast majority of these algorithms usually generate cluster labels...
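A common embedded formulation in this line of work generates pseudo cluster labels and then solves an L2,1-regularized regression, so that the row norms of the weight matrix rank the features. The sketch below follows that generic recipe with k-means pseudo-labels and the standard reweighted least-squares iteration; the k-means step and the parameters `lam`, `n_clusters`, and `n_iter` are assumptions for illustration, not the specific algorithm of this paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def l21_embedded_scores(X, n_clusters=3, lam=0.1, n_iter=30, eps=1e-8):
    """Generate pseudo cluster labels, then solve
        min_W ||X W - Y||_F^2 + lam * ||W||_{2,1}
    by the standard reweighted least-squares iteration; the row norms of W
    score how useful each feature is for reconstructing the cluster structure."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    Y = np.eye(n_clusters)[labels]                     # one-hot pseudo-label matrix
    n_features = X.shape[1]
    W = np.zeros((n_features, n_clusters))
    d = np.ones(n_features)                            # reweighting terms 1 / (2 ||w_i||)
    for _ in range(n_iter):
        W = np.linalg.solve(X.T @ X + lam * np.diag(d), X.T @ Y)
        d = 1.0 / (2.0 * np.linalg.norm(W, axis=1) + eps)
    return np.linalg.norm(W, axis=1)                   # larger = more important feature

X = np.random.rand(120, 15)
scores = l21_embedded_scores(X)
print(np.argsort(scores)[::-1][:5])                    # indices of the top 5 features
```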
Unsupervised Personalized Feature Selection
Feature selection is effective in preparing high-dimensional data for a variety of learning tasks such as classification, clustering and anomaly detection. A vast majority of existing feature selection methods assume that all instances share some common patterns manifested in a subset of shared features. However, this assumption is not necessarily true in many domains where data instances could...
Journal
Journal title: IEEE Access
Year: 2017
ISSN: 2169-3536
DOI: 10.1109/access.2017.2699741